CERIAS Blog


Open Source Outclassing Home Router Vendor’s Firmware


I’ve had an interesting new experience these last few months.  I was faced with a choice: return a home wireless router yet again and try a different model or brand, or try an open source firmware replacement.  If the reviews on sites like Amazon and Newegg are to be believed, all home wireless routers have significant flaws, so the return-and-exchange game could have gone on for a while.  The second Linksys device I bought (the most expensive on the display!) had the QoS features I wanted, but it crashed every day and had to be rebooted, even with the latest vendor-provided firmware.  It was hardly better than the Verizon-provided Westell modem, which sometimes had to be rebooted several times per day despite having simpler firmware.  That was an indication of poor code quality, and quite likely of security problems (beyond the obvious availability issues). 

I then heard about DD-WRT, an alternative firmware released under the GPL.  There are other alternative firmware projects as well, but I chose this one simply because it supported the Linksys router;  I’m not sure which of the alternatives is best.  For several months now, not only has the device demonstrated 100% availability with v.24 (RC5), but it also supports more advanced security features and is more polished.  I expected difficulties because it is beta software, but had none.  Neither CERIAS nor I endorse DD-WRT, and I don’t care whether my home router runs vendor-provided or open source firmware, as long as it is a trustworthy and reliable implementation of the features I want.  Yet I am amazed that open source firmware has outclassed the firmware of an expensive (for a home router) model from a recognized and trusted brand.  Perhaps home router vendors should give up their proprietary, low-quality development efforts, and instead fund or contribute to projects like DD-WRT and install that firmware by default.  A similar suggestion applies if the software development is already outsourced.  I believe it might save their customers a lot of grief, and lower the return rates on these products.

Firefox’s Super Cookies


Given all the noise that was made about cookies and programs that look for “spy cookies”, the silence about DOM storage is a little surprising.  DOM storage allows web sites to store all kinds of information in a persistent manner on your computer, much like cookies but with greater capacity and efficiency.  Another way that web sites store information about you is Adobe Flash’s local storage;  this seems to be a highly popular option (e.g., YouTube stores statistics about you that way), and it is better known.  Web applications such as pandora.com will even deny you access if you turn it off on the Flash management page.  If you’re curious, look at the contents of “~/.macromedia/Flash_Player/#SharedObjects/”, but most of them are not human readable. 
I wonder why DOM storage isn’t used much after being available for a whole year;  I haven’t been able to find any web site or web application making use of it so far, besides a proof of concept for taking notes.  Yet it probably will be (ab)used, given enough time.  There is no user interface in Firefox for viewing this information, deleting it, or managing it in a meaningful way.  All you can do is turn it on or off by going to the “about:config” URL, typing “storage” in the filter, and setting the preference to true or false.  Compare this to what you can do about cookies…  I’m not suggesting that anyone worry about it, but I think we should have more control over what is stored and how, and the curious or paranoid should be able to view and audit the contents without needing the tricks below.  Flash local storage should also be auditable, but I haven’t found a way to do it easily.
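In the meantime, here is a minimal Python sketch that at least enumerates the Flash local shared objects (the “.sol” files) and their sizes, so you can see which sites are storing data on your computer;  decoding the binary contents is another matter.  It assumes the Linux path mentioned above, which may differ on other platforms.

import os

# Flash local storage lives here on Linux; the path differs elsewhere.
base = os.path.expanduser("~/.macromedia/Flash_Player/#SharedObjects/")

# Walk the tree and report each ".sol" (local shared object) file.
for dirpath, dirnames, filenames in os.walk(base):
    for name in filenames:
        if name.endswith(".sol"):
            path = os.path.join(dirpath, name)
            print("%8d bytes  %s" % (os.path.getsize(path), path))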

Auditing DOM storage.  To find out what information web sites have stored on your computer using DOM storage (if any), you first need to find where your Firefox profile is kept.  On Linux, it is under “~/.mozilla/firefox/”.  There you should find a file named “webappsstore.sqlite”.  To view its contents in human-readable form, install sqlite3;  on Ubuntu you can use Synaptic to search for sqlite3 and install it.  Then, the command:
echo 'select * from webappsstore;' | sqlite3 webappsstore.sqlite

will print contents such as (warning, there could potentially be a lot of data stored):
cerias.purdue.edu|test|asdfasdf|0|homes.cerias.purdue.edu

Other SQL commands can be used to delete specific entries, change them, or even add new ones.  If you are a programmer, you should know better than to trust these values!  They are no more secure than cookies. 
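If you prefer a script to the one-liner, here is a small Python sketch that performs the same audit.  It assumes the table name webappsstore shown above;  since the exact column layout may vary across Firefox versions, it reads the column names from the cursor instead of hard-coding them.  Deleting or changing entries works the same way, with DELETE or UPDATE statements.

import glob
import os
import sqlite3

# Find webappsstore.sqlite in every Firefox profile directory.
pattern = os.path.expanduser("~/.mozilla/firefox/*/webappsstore.sqlite")
for path in glob.glob(pattern):
    conn = sqlite3.connect(path)
    cur = conn.execute("SELECT * FROM webappsstore")
    print(path)
    # Column names vary across Firefox versions, so read them dynamically.
    print([d[0] for d in cur.description])
    for row in cur.fetchall():
        print(row)
    conn.close()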

Speculations on Teaching Secure Programming


I have taught secure programming for several years, and along the way I have developed a world view of how teaching it differs from teaching other subjects.  Some of the following are inferences from uncontrolled observations;  others are simply opinions or mere speculation.  I present this world view here, hoping that it will generate some discussion and that its flaws will be corrected. 

Like other fields, software security can be studied from several different angles, such as secure software engineering, secure coding at a technical level, architecture, procurement, configuration, and deployment.  As in other fields, effective software security teaching depends on the audience—its needs, its current state and capabilities, and its potential for learning.  Learning techniques such as repetition are useful, and students can ultimately benefit from organized, abstracted thought on the subject.  However, teaching software security is different from teaching other subjects because it involves not just facts (data), “how to” (skills), and theories and models (knowledge), but also a mindset and the capability to repeatably derive and achieve a form of wisdom in varied, even new, situations.  It’s not just a question of the technologies used or the degree of technological acumen, but of behavioral psychology, economics, motivation and humor.

Behavioral Psychology—Security is somewhat of a habit, an attitude, a way of thinking and living.  You won’t become a secure programmer just because you learned of a new vulnerability, exploit or security trick today, although it may help and have a cumulative effect.  Attacking requires opportunistic, lateral, experimental thinking, with exciting rewards upon success.  It somewhat resembles the capability to create humor by taking something out of the context for which it was created and subjecting it to new, unexpected conditions.  I am also surprised sometimes by the amount of perseverance and dedication attackers demonstrate.  Defending requires vigilance and systematic, careful, most often tedious labor and thought, which are rewarded slowly by “uptime” or long-term peace.  The two mindsets are different, yet understanding one is a great advantage in the other.  To excel at both simultaneously is difficult, requires practice, and is probably not achievable by everyone.  I note that undergraduate computer science rewards passing tests, sometimes including software tests provided with assignments, which are closer to immediate rewards upon success or immediate failure, with no long-term consequences or requirements.  On top of that, assignments are most often evaluated solely on achieving functionality, and not on preventing unintended side effects or disallowing other behaviors.  I suspect that this produces graduates with learned behaviors unfavorable to security.  The problem with behaviors is that you may know better than what you’re doing, but you do it anyway.  Economics may provide some limited justification.

Economics—Many people know that doing things securely is “better”, and that they ought to, but it costs.  People are “naturally optimizing” (lazy)—they won’t do something if there’s no perceived need for it, or if they can delay paying the costs or ultimately pay only the necessary ones (“late security”, as in “late binding”).  This is where patches stand;  vulnerability disclosures and patches are remotely possible costs to be weighed against the perceived opportunity costs of delays and additional production expenses.  Isolated occurrences of exploits and vulnerability disclosures may be dismissed as bad luck, accidents, or something that happens to other projects.  Intense scrutiny of some of its work may be necessary to demonstrate to a product’s team that its software engineering methods and security results are flawed.  There is plenty of evidence that these attempts at evading costs don’t work well and often backfire. 
Even if change is desired, students can graduate with negligible knowledge of the best practices presented in the 2007 SOAR on Software Security Assurance.  Computer science programs are strained by the large amount of knowledge that needs to be taught;  perhaps software engineering should be spun off, just as electrical engineering was spun off from physics.  Companies that need software engineers, and ultimately our economy, would be better served by that than by getting students who were just told to “go and create a program that does this and that”.  While I was revising these thoughts, “Crosstalk” published some opinions on the use of Java for teaching computer science, under a title lamenting “where are the software engineers of tomorrow?”  I think that there is just not enough teaching time to educate people to become both good computer scientists and good software engineers, and the result satisfies neither need.  Even if new departments aren’t created, two different degrees should probably be offered.

Motivation—For many, attempts to teach software security will go in one ear and out the other unless consequences are demonstrated.  Most people need to be shown the exploits that a flaw enables before they believe it is a serious flaw.  This resembles how a kid may ignore warnings about burns and hot things until a burn is experienced.  Even as teenagers and adults, every summer some people have to re-learn why sunscreen is needed, and for others the possibility of skin cancer is too remote a consideration.  So, security teaching needs to include many anecdotes and examples of bad things that happened.  I like to show real code in class and analyze the mistakes that were made;  that approach seems to capture the interest of undergraduates.  At a later stage, this will evolve from “security prevents bad things” to “with security you can do this safely”.  Keeping secure programming topical, by discussing current events in class, can make it even more interesting and exciting.
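As an illustration of that approach, here is the kind of before-and-after fragment one might dissect in class.  It is a made-up Python example, not real production code, showing a classic SQL injection flaw and its fix.

import sqlite3

def find_user_unsafe(conn, username):
    # Flawed: the input is spliced into the SQL string, so a username
    # like "x' OR '1'='1" rewrites the query (SQL injection).
    query = "SELECT * FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query keeps data separate from code.
    query = "SELECT * FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

Note that both functions “work” on well-behaved input, which is exactly why grading on functionality alone rewards the flawed version.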

Repetition—Repeated experiences reinforce learning.  Security-focused code scanners repeat and reinforce good coding practices, as long as their warnings are not allowed to be ignored.  Code audits reinforce the message, this time coming from peers, and so bring peer pressure and the risk of shame.  They are great in a company, but I am ambivalent about having students audit each other’s code, due to the risk of humiliation—humiliation is not appropriate while learning, for many reasons.  Also, the students doing the audit are, by definition, not yet competent, and I’m not sure how I would grade the activity.  Code audits by the teacher do not scale well.  This leaves scanners.  I have looked into them and tried some commercial code scanners, but what I’ve seen so far are systems that are unmanageable for classroom use and that miss some of the flaws I wish they would catch. 

Organization and abstraction—Whereas showing exploits and attacks is good for beginners, more advanced students will want to move away from blacklists of things not to do (e.g., “Deadly Sins”) toward good practices, assurance, and formal methods.  I gave a presentation on the subject almost two years ago.

In conclusion, teaching secure programming differs from teaching typical subjects in how the knowledge is utilized:  it needs to change behaviors and attitudes, and it benefits from different tools and activities.  It is also interesting in how it connects with morality.  While these characteristics aren’t unique in the entire body of human knowledge, they do present interesting challenges.

Confusion of Separation of Privilege and Least Privilege


Least privilege is the idea of giving a subject or process only the privileges it needs to complete a task.  Compartmentalization is a technique for separating code into parts to which least privilege can be applied, so that if one part is compromised, the attacker does not gain full access.  Why does this get confused all the time with separation of privilege?  Separation of privilege is breaking up a *single* privilege among multiple, independent components or people, so that agreement or collusion among several of them is necessary to perform an action (e.g., dual signature checks).  So, if an authentication system has various biometric components, a component that evaluates a token, and another component that evaluates some knowledge or capability, and all have to agree for authentication to occur, then that is separation of privilege.  It is essentially a logical “AND” operation;  in its simplest form, a system checks several conditions before granting approval for an operation.  Bishop uses the example of “su” or “sudo”:  a user (or the attacker of a compromised process) needs to know the appropriate password, and the user needs to be in a special group.  A related, but not identical, concept is that of majority voting systems, where redundant systems have to agree, hopefully outvoting a defective one.  If there were no voting, i.e., if all of the systems always had to agree, it would be separation of privilege.  OpenSSH’s UsePrivilegeSeparation option is *not* an implementation of separation of privilege by that definition;  it simply runs compartmentalized code, applying least privilege to each compartment.
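To make the contrast concrete, here is a toy Python sketch;  the check functions are hypothetical stand-ins for real mechanisms.  Separation of privilege requires every independent check to agree (the “AND”), while a majority voting system tolerates a defective component.

# Hypothetical, independent authentication checks; each returns True on success.
def biometric_ok(request):
    return request.get("fingerprint") == "match"

def token_ok(request):
    return request.get("token") == "valid"

def knowledge_ok(request):
    return request.get("password") == "correct"

CHECKS = [biometric_ok, token_ok, knowledge_ok]

def separation_of_privilege(request):
    # A logical AND: all independent components must agree.
    return all(check(request) for check in CHECKS)

def majority_vote(request):
    # Redundancy with voting: a defective component can be outvoted.
    # This is fault tolerance, not separation of privilege.
    return sum(1 for check in CHECKS if check(request)) > len(CHECKS) // 2

A request that fails a single check fails the “AND” but can still pass the vote, which is exactly why the two should not be confused.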

ReAssure Version 1.01 Released


As the saying goes, version 1.0 always has bugs, and ReAssure was no exception.  Version 1.01 is a bug-fix release for broken links and the like;  there were no security issues.  Download the source code in Ruby here, or try it there.  ReAssure is the virtualization (VMware and UML) experimental testbed built for containment and networking security experiments.  Two computers are dedicated to creating and updating images, and of course you can use VMware appliances.  The other 19 computers are connected to a Gbit switch configured on the fly according to the network topology you specified, with images transferred, set up, and started automatically.  Remote access is through ssh for the host OS, and through NX (think VNC) or the VMware console for the guest OS.